College Admission




The Unfairness of $\varepsilon$-Fairness

Fadina, Tolulope, Schmidt, Thorsten

arXiv.org Machine Learning

Fairness in decision-making processes is often quantified using probabilistic metrics. However, these metrics may not fully capture the real-world consequences of unfairness. In this article, we adopt a utility-based approach to more accurately measure the real-world impact of decision-making processes. In particular, we show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in the real-world context. Additionally, we address the common issue of unavailable data on false negatives by proposing a reduced setting that still captures essential fairness considerations. We illustrate our findings with two real-world examples: college admissions and credit risk assessment. Our analysis reveals that while traditional probability-based evaluations might suggest fairness, a utility-based approach uncovers the actions necessary to truly achieve equality. For instance, in the college admission case, we find that improving completion rates is crucial for ensuring fairness. In summary, this paper highlights the importance of considering the real-world context when evaluating fairness.
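
A minimal numerical sketch of this contrast, under stated assumptions: $\varepsilon$-fairness is instantiated here as demographic parity in acceptance rates, and the utility weights and synthetic completion rates are invented for illustration, not taken from the paper.

```python
# Illustrative sketch only: epsilon-fairness is taken to mean demographic
# parity in acceptance rates; utilities and completion rates are assumed.
import numpy as np

def epsilon_fair(dec_a, dec_b, eps):
    """Acceptance rates of the two groups differ by at most eps."""
    return abs(dec_a.mean() - dec_b.mean()) <= eps

def group_utilities(decisions, outcomes, groups, u_succ=1.0, u_fail=-1.0):
    """Mean realized utility per group: an accepted applicant yields u_succ
    on a good outcome (e.g., degree completion) and u_fail otherwise; a
    rejected applicant yields 0."""
    return {g: (decisions[groups == g]
                * np.where(outcomes[groups == g] == 1, u_succ, u_fail)).mean()
            for g in np.unique(groups)}

rng = np.random.default_rng(0)
groups = np.repeat([0, 1], 500)
decisions = np.zeros(1000, dtype=int)
decisions[:200] = 1      # group 0: exactly 40% accepted
decisions[500:700] = 1   # group 1: exactly 40% accepted
completion = np.where(groups == 0,
                      rng.binomial(1, 0.9, 1000),   # high completion rate
                      rng.binomial(1, 0.3, 1000))   # low completion rate

print(epsilon_fair(decisions[:500], decisions[500:], eps=0.0))  # True: 0-fair
print(group_utilities(decisions, completion, groups))           # large utility gap
```

Even though both groups are accepted at identical rates, and the decision rule is therefore fair for any $\varepsilon \geq 0$ under this metric, the realized utilities diverge sharply; this is the kind of real-world unfairness a purely probabilistic evaluation can miss.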


A Moral Imperative: The Need for Continual Superalignment of Large Language Models

Puthumanaillam, Gokul, Vora, Manav, Thangeda, Pranay, Ornik, Melkior

arXiv.org Artificial Intelligence

This paper examines the challenges associated with achieving life-long superalignment in AI systems, particularly large language models (LLMs). Superalignment is a theoretical framework that aspires to ensure that superintelligent AI systems act in accordance with human values and goals. Despite its promising vision, we argue that achieving superalignment requires substantial changes in current LLM architectures due to their inherent limitations in comprehending and adapting to the dynamic nature of human ethics and evolving global scenarios. We dissect the challenges of encoding an ever-changing spectrum of human values into LLMs, highlighting the discrepancies between static AI models and the dynamic nature of human societies. To illustrate these challenges, we analyze two distinct examples: one demonstrates a qualitative shift in human values, while the other presents a quantifiable change. Through these examples, we illustrate how LLMs, constrained by their training data, fail to align with contemporary human values and scenarios. The paper concludes by exploring potential strategies to address and possibly mitigate these alignment discrepancies, suggesting a path forward in the pursuit of more adaptable and responsive AI systems.


Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters

Lee, Jinsook, Thymes, Bradon, Zhou, Joyce, Joachims, Thorsten, Kizilcec, Rene F.

arXiv.org Artificial Intelligence

University admission at many highly selective institutions uses a holistic review process, where all aspects of the application, including protected attributes (e.g., race, gender), grades, essays, and recommendation letters, are considered in order to compose an excellent and diverse class. In this study, we empirically evaluate how influential protected attributes are for predicting admission decisions using a machine learning (ML) model, and to what extent textual information (e.g., personal essay, teacher recommendation) can substitute for the loss of protected attributes in the model. Using data from 14,915 applicants to an undergraduate admission office at a selective U.S. institution in the 2022-2023 cycle, we find that the exclusion of protected attributes from the ML model leads to substantially reduced admission-prediction performance. The inclusion of textual information via both a TF-IDF representation and a latent Dirichlet allocation (LDA) model partially restores model performance, but does not appear to be a full substitute when it comes to admitting a similarly diverse class. In particular, while the text helps with gender diversity, the proportion of URM applicants is severely impacted by the exclusion of protected attributes, and the inclusion of new attributes generated from the textual information does not recover this performance loss.
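
As a rough sketch of the kind of pipeline the study describes, the snippet below augments a structured-feature classifier with TF-IDF and LDA features using scikit-learn; the placeholder data (`essays`, `grades`, `admitted`) and all modeling choices (feature counts, ten topics, logistic regression) are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical sketch, not the study's pipeline: combine a structured
# feature (grades) with TF-IDF and LDA representations of essay text.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for real application records.
essays = (["I founded the robotics club and tutored math after school"] * 50
          + ["Volunteering at the clinic shaped my interest in public health"] * 50)
grades = np.random.default_rng(1).normal(3.5, 0.4, size=(100, 1))
admitted = np.random.default_rng(2).binomial(1, 0.3, size=100)

# TF-IDF representation of the essays.
tfidf = TfidfVectorizer(max_features=2000, stop_words="english")
X_tfidf = tfidf.fit_transform(essays)

# LDA topic proportions, fit on raw term counts as is conventional.
counts = CountVectorizer(max_features=2000, stop_words="english").fit_transform(essays)
X_lda = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(counts)

# Concatenate structured and textual features, then fit the classifier.
X = hstack([csr_matrix(grades), X_tfidf, csr_matrix(X_lda)])
X_tr, X_te, y_tr, y_te = train_test_split(X, admitted, random_state=0,
                                          stratify=admitted)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```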


ChatGPT is not the end of written integrity - The Georgetown Voice

#artificialintelligence

When the first capable version of ChatGPT was released in November 2022, professors across the internet bemoaned the death of the undergraduate essay as a method of assessing students. The Atlantic called the moment a "textpocalypse," and a writer for The New York Times said he was "deeply unsettled" following a conversation with Bing's integrated AI chatbot. ChatGPT, unlike earlier chatbots, can generate coherent, long-form writing, and it has upended what it means to write. Upon further analysis, however, it may not be the game-changer for writing or other industries that the world initially envisioned.


The Real A.I. In College Admission

#artificialintelligence

I know what you're thinking: "Another article about ChatGPT (Generative Pre-trained Transformer), the artificial intelligence wonder-bot from OpenAI, and how it is going to revolutionize society, work, education, and more." Perhaps it will, but that is not this article. The rise of this extraordinary technology, rather than muddling the question, makes it clearer than ever what constitutes authenticity. If you or your child are applying to college, you have undoubtedly heard an admission officer talk about authenticity.


Meaningful Standards for Auditing High-Stakes Artificial Intelligence

#artificialintelligence

When hiring, many organizations use artificial intelligence tools to scan resumes and predict job-relevant skills. Colleges and universities use AI to automatically score essays, process transcripts, and review extracurricular activities to predetermine who is likely to be a "good student." With so many distinct use cases, it is important to ask: can AI tools ever be truly unbiased decision-makers? In response to claims of unfairness and bias in tools used in hiring, college admissions, predictive policing, health interventions, and more, the University of Minnesota recently developed a new set of auditing guidelines for AI tools. The guidelines, published in American Psychologist, were developed by Richard Landers, associate professor of psychology at the University of Minnesota, and Tara Behrend of Purdue University.
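
As one concrete example of the kind of check an AI audit might run, the sketch below applies the classic four-fifths rule from U.S. employee-selection guidance, which flags a tool when any group's selection rate falls below 80% of the highest group's rate; this heuristic is offered for illustration only and is not a summary of the Landers and Behrend guidelines.

```python
# Illustrative audit check: the four-fifths (80%) rule on selection rates.
# This is a generic heuristic, not part of the published guidelines.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs with selected in {0, 1}."""
    tallies = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        tallies[group][0] += selected
        tallies[group][1] += 1
    return {g: s / n for g, (s, n) in tallies.items()}

def four_fifths_violations(records, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate, together with that ratio."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Toy data: group A selected at 60%, group B at 35% -> ratio 0.58, flagged.
records = ([("A", 1)] * 60 + [("A", 0)] * 40
           + [("B", 1)] * 35 + [("B", 0)] * 65)
print(four_fifths_violations(records))  # {'B': 0.583...}
```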


Learning Strategies in Decentralized Matching Markets under Uncertain Preferences

Dai, Xiaowu, Jordan, Michael I.

arXiv.org Machine Learning

We study two-sided decentralized matching markets in which participants have uncertain preferences. We present a statistical model for learning these preferences. The model incorporates an uncertain state and the participants' competition on one side of the market. We derive an optimal strategy that maximizes the agent's expected payoff and calibrate the uncertain state by taking opportunity costs into account. We discuss the sense in which the matching derived from the proposed strategy has a stability property, and we prove a fairness property asserting that there exists no justified envy under the proposed strategy. We provide numerical results demonstrating improved payoff, stability, and fairness compared to alternative methods.
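
A toy sketch of the expected-payoff logic described above, not the paper's statistical model: an agent with uncertain admission chances picks the program maximizing expected payoff, with rejection falling back to an assumed outside option that stands in for the opportunity cost of a wasted application.

```python
# Toy illustration of choosing under uncertain preferences/acceptance:
# the acceptance probabilities, utilities, and outside option are all
# invented for this sketch; the paper's model is far richer.
def best_application(programs, outside_option=0.0):
    """programs: list of (name, p_accept, utility) triples."""
    def expected_payoff(p_accept, utility):
        # Success pays `utility`; failure leaves the outside option.
        return p_accept * utility + (1 - p_accept) * outside_option
    return max(programs, key=lambda t: expected_payoff(t[1], t[2]))

programs = [("reach", 0.10, 10.0),   # high value, low acceptance chance
            ("match", 0.55, 6.0),    # moderate on both
            ("safety", 0.95, 3.0)]   # low value, near-certain acceptance
print(best_application(programs, outside_option=1.0))  # -> ('match', 0.55, 6.0)
```

Here the "match" program wins (expected payoff 3.75) over the reach (1.9) and the safety (2.9); raising the outside option shifts the optimum toward riskier applications, which is the opportunity-cost effect the abstract alludes to.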